Attention fusion network based video super-resolution reconstruction
BIAN Pengcheng, ZHENG Zhonglong, LI Minglu, HE Yiran, WANG Tianxiang, ZHANG Dawei, CHEN Liyuan
Journal of Computer Applications    2021, 41 (4): 1012-1019.   DOI: 10.11772/j.issn.1001-9081.2020081292
Video super-resolution methods based on deep learning mainly focus on the inter-frame and intra-frame spatio-temporal relationships in the video, but previous methods have many shortcomings in the feature alignment and fusion of video frames, such as inaccurate motion information estimation and insufficient feature fusion. Aiming at these problems, a video super-resolution model based on Attention Fusion Network (AFN) was constructed using the back-projection principle together with multiple attention mechanisms and fusion strategies. Firstly, at the feature extraction stage, in order to deal with the multiple motions between neighboring frames and the reference frame, the back-projection architecture was used to obtain the error feedback of motion information. Then, a temporal, spatial and channel attention fusion module was used to perform multi-dimensional feature mining and fusion. Finally, at the reconstruction stage, the obtained high-dimensional features were convolved to reconstruct high-resolution video frames. By learning different weights for features within and between video frames, the correlations between video frames were fully explored, and an iterative network structure was adopted to process the extracted features gradually from coarse to fine. Experimental results on two public benchmark datasets show that AFN can effectively process videos with multiple motions and occlusions, and achieves significant improvements in quantitative indicators compared with some mainstream methods. For instance, for the 4× reconstruction task, the Peak Signal-to-Noise Ratio (PSNR) of the frames reconstructed by AFN is 13.2% higher than that of the Frame Recurrent Video Super-Resolution network (FRVSR) on the Vid4 dataset and 15.3% higher than that of the Video Super-Resolution network using Dynamic Upsampling Filter (VSR-DUF) on the SPMCS dataset.
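As a rough illustration of the channel branch of such an attention fusion module (not the paper's network; a pure-Python sketch with made-up feature values), each channel can be re-weighted by a softmax over its global average:

```python
import math

def channel_attention(features):
    """Toy channel-attention step: weight each channel of a feature map
    by a softmax over its global average (a simplified stand-in for the
    temporal/spatial/channel attention fusion described above)."""
    # features: one flat list of activations per channel
    means = [sum(ch) / len(ch) for ch in features]   # global average pooling
    exps = [math.exp(m) for m in means]
    total = sum(exps)
    weights = [e / total for e in exps]              # softmax over channels
    # re-scale every channel by its attention weight
    return [[w * v for v in ch] for ch, w in zip(features, weights)]

feats = [[1.0, 2.0], [3.0, 4.0]]
out = channel_attention(feats)
```

Channels with larger average responses keep more of their signal, which is the basic mechanism behind "learning different weights for features".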
Hash learning based malicious SQL detection
LI Mingwei, JIANG Qingyuan, XIE Yinpeng, HE Jindong, WU Dan
Journal of Computer Applications    2021, 41 (1): 121-126.   DOI: 10.11772/j.issn.1001-9081.2020060967
To address the high storage cost and low retrieval speed of the Nearest Neighbor (NN) method in malicious Structured Query Language (SQL) detection, a Hash learning based Malicious SQL Detection (HMSD) method was proposed. In this algorithm, hash learning was used to learn binary code representations for SQL statements. Firstly, the SQL statements were represented as real-valued features after cleaning the data and removing duplicate statements. Secondly, isotropic hashing was used to learn the binary code representation of the SQL statements. Lastly, retrieval was performed over the binary codes, which improved the detection speed. Experimental results show that, on the malicious SQL detection dataset Wafamole (randomly split into a training set of 10 000 SQL statements and a test set of 30 000 SQL statements), at a code length of 128 bits, compared with the nearest neighbor method the proposed algorithm increases detection accuracy by 1.3%, reduces the False Positive Rate (FPR) by 0.19%, reduces the False Negative Rate (FNR) by 2.41%, cuts retrieval time by 94% and storage cost by 97.5%; compared with the support vector machine method, it increases detection accuracy by 0.17%. These results demonstrate that the proposed algorithm can solve the problems of the nearest neighbor method in malicious SQL detection.
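The retrieval step the abstract describes, nearest-neighbor search in Hamming space over learned binary codes, can be sketched as follows (illustrative 8-bit codes and labels, not the actual HMSD codes, which are 128 bits):

```python
def hamming(a, b):
    """Number of differing bits between two binary codes stored as ints."""
    return bin(a ^ b).count("1")

def detect(query_code, database):
    """Return the label of the stored code nearest to the query in
    Hamming distance -- the fast retrieval that replaces real-valued
    nearest-neighbor search once binary codes are learned."""
    return min(database, key=lambda item: hamming(query_code, item[0]))[1]

db = [(0b11110000, "malicious"), (0b00001111, "benign")]
label = detect(0b11110001, db)  # one bit away from the malicious code
```

Because XOR and popcount are single machine instructions, this is where the reported 94% retrieval-time reduction comes from.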
Analysis of three-time-slot P-persistent CSMA protocol with variable collision duration in wireless sensor network
LI Mingliang, DING Hongwei, LI Bo, WANG Liqing, BAO Liyong
Journal of Computer Applications    2020, 40 (7): 2038-2045.   DOI: 10.11772/j.issn.1001-9081.2019112028
Random multiple access communication is an indispensable part of computer communication research. To address the shortcomings of the traditional P-persistent Carrier Sense Multiple Access (P-CSMA) protocol in transmission control and system energy consumption in Wireless Sensor Networks (WSN), a three-time-slot P-CSMA protocol with variable collision duration was proposed. In this protocol, a collision duration is added to the traditional two-time-slot P-CSMA protocol, turning the system into a three-time-slot model whose slot types are the duration of a successfully transmitted packet, the duration of a packet collision, and the idle duration of the system. Through modeling, the throughput, collision rate and idle rate of the system under this model were analyzed, and it was found that changing the collision duration reduces the loss of the system. Compared with the traditional P-CSMA protocol, the proposed protocol improves system performance and obviously extends the node lifetime derived from the battery model. From the analysis, a system simulation flowchart of the protocol was obtained. Finally, the correctness of the theoretical derivation is proved by comparing the theoretical and simulated values of different indexes.
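A minimal Monte Carlo sketch of p-persistent access, counting the three slot outcomes the model distinguishes, conveys the quantities the analysis derives (parameters are illustrative; the paper's model additionally assigns different durations to the three slot types):

```python
import random

def simulate(n_nodes=10, p=0.1, slots=10000, seed=0):
    """Crude Monte Carlo of p-persistent access: in each slot every node
    transmits with probability p. Exactly one sender is a success,
    several senders are a collision, no sender is an idle slot."""
    rng = random.Random(seed)
    success = collision = idle = 0
    for _ in range(slots):
        senders = sum(rng.random() < p for _ in range(n_nodes))
        if senders == 1:
            success += 1
        elif senders > 1:
            collision += 1
        else:
            idle += 1
    return success / slots, collision / slots, idle / slots

s, c, i = simulate()
```

With n = 10 and p = 0.1, the success fraction should hover near the theoretical n·p·(1−p)^(n−1) ≈ 0.39; weighting the three fractions by their slot durations gives the throughput the paper analyzes.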
Automatic annotation of visual deep neural network
LI Ming, GUO Chenhao, CHEN Xing
Journal of Computer Applications    2020, 40 (6): 1593-1600.   DOI: 10.11772/j.issn.1001-9081.2019101774
Focused on the issue that developers cannot quickly figure out the models they need from the large variety of available models, an automatic annotation method for visual deep neural networks based on natural language processing technology was proposed. Firstly, visual neural networks were divided into field categories, and the keywords and corresponding weights of each field were calculated according to word frequency and other information. Secondly, a keyword extractor was built to extract keywords from paper abstracts. Finally, the similarities between the extracted keywords and the known weights were calculated to obtain the application fields of a specific model. The experiments were carried out on data derived from papers published in three top international conferences on computer vision: the IEEE International Conference on Computer Vision (ICCV), the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) and the European Conference on Computer Vision (ECCV). The experimental results indicate that the proposed method provides highly accurate classification results with a macro average value of 0.89, which verifies its validity.
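The final matching step, comparing the keywords extracted from an abstract against the known per-field weights, might look like the following cosine-similarity sketch (field profiles and keywords are invented for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse keyword-weight dicts."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def classify(abstract_keywords, field_profiles):
    """Pick the field whose known keyword weights are most similar to
    the keywords extracted from a paper abstract."""
    return max(field_profiles,
               key=lambda f: cosine(abstract_keywords, field_profiles[f]))

profiles = {
    "detection":    {"detector": 0.9, "box": 0.7},
    "segmentation": {"mask": 0.9, "pixel": 0.6},
}
field = classify({"detector": 1.0, "box": 0.5}, profiles)
```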
Heterogeneous directional sensor node scheduling algorithm for differentiated coverage
LI Ming, HU Jiangping, CAO Xiaoli, PENG Peng
Journal of Computer Applications    2020, 40 (12): 3563-3570.   DOI: 10.11772/j.issn.1001-9081.2020050696
In order to prolong the lifespan of a heterogeneous directional sensor network, a node scheduling algorithm based on an Enhanced Coral Reef Optimization algorithm (ECRO), supporting different monitoring requirements for different targets, was proposed. ECRO was utilized to divide the sensor set into multiple sets satisfying the coverage requirements, so that the network lifespan could be prolonged by scheduling among the sets. The improvement over the Coral Reef Optimization algorithm (CRO) is reflected in four aspects. Firstly, the migration operation of the biogeography-based optimization algorithm was introduced into the brooding of the coral reef to preserve the excellent solutions of the original population. Secondly, a differential mutation operator with a chaotic parameter was adopted in brooding to enhance the optimization ability of the offspring. Thirdly, a random reverse learning strategy was performed on the worst individual of the population to improve population diversity. Fourthly, CRO was combined with the simulated annealing algorithm to increase the local search capability. Extensive simulation experiments on both numerical benchmark functions and node scheduling were conducted. The numerical tests show that ECRO has better optimization ability than the genetic algorithm, the simulated annealing algorithm, the differential evolution algorithm and an improved differential evolution algorithm. The node scheduling results show that, compared with the greedy algorithm, the Learning Automata Differential Evolution (LADE) algorithm and the original CRO, ECRO improves the network lifespan by 53.8%, 19.0% and 26.6% respectively, which demonstrates the effectiveness of the proposed algorithm.
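The third improvement, random reverse (opposition-based) learning on the worst individual, can be sketched as follows (the bounds, random scaling and clamping are illustrative assumptions, not the paper's exact operator):

```python
import random

def reverse_learning(individual, lower, upper, rng):
    """Random reverse (opposition-based) learning: reflect each gene of
    the worst individual across the search interval with a random scale,
    clamped to the bounds, to inject diversity into the population."""
    return [min(upper, max(lower, rng.random() * (lower + upper) - x))
            for x in individual]

rng = random.Random(1)
worst = [0.9, 0.8, 0.95]
opposed = reverse_learning(worst, 0.0, 1.0, rng)
```

A solution stuck near one end of the interval jumps toward the opposite region, which is what lets the strategy rescue a stagnating population.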
Real-time facial expression recognition based on convolutional neural network with multi-scale kernel feature
LI Minze, LI Xiaoxia, WANG Xueyuan, SUN Wei
Journal of Computer Applications    2019, 39 (9): 2568-2574.   DOI: 10.11772/j.issn.1001-9081.2019030540

Aiming at the problems of insufficient generalization ability, poor stability and difficulty in meeting the real-time requirements of facial expression recognition, a real-time facial expression recognition method based on a convolutional neural network with multi-scale kernel features was proposed. Firstly, an improved MSSD (MobileNet + Single Shot multiBox Detector) lightweight face detection network was proposed, and the detected face coordinates were tracked by a Kernel Correlation Filter (KCF) model to improve the detection speed and stability. Then, three linear bottleneck branches with convolution kernels of three different scales were combined by channel-concatenation feature fusion into a multi-scale kernel convolution unit, whose diverse features were used to improve the accuracy of expression recognition. Finally, in order to improve the generalization ability of the model and prevent over-fitting, different linear transformations were used for data augmentation, and the model trained on the FER-2013 facial expression dataset was transferred to the small-sample CK+ dataset for retraining. The experimental results show that the recognition rate of the proposed method reaches 73.0% on the FER-2013 dataset, 1.8% higher than that of the Kaggle Expression Recognition Challenge champion, and 99.5% on the CK+ dataset. For 640×480 video, the face detection speed of the proposed method reaches 158 frames per second, 6.3 times that of the mainstream face detection network MTCNN (MultiTask Cascaded Convolutional Neural Network), and the overall speed of face detection plus expression recognition reaches 78 frames per second. The proposed method can therefore achieve fast and accurate facial expression recognition.

Ship tracking and recognition based on Darknet network and YOLOv3 algorithm
LIU Bo, WANG Shengzheng, ZHAO Jiansen, LI Mingfeng
Journal of Computer Applications    2019, 39 (6): 1663-1668.   DOI: 10.11772/j.issn.1001-9081.2018102190
Aiming at the low utilization rate, high error rate, lack of recognition ability and heavy manual involvement of video surveillance processing in the coastal and inland waters of China, a new ship tracking and recognition method based on the Darknet network model and the YOLOv3 algorithm was proposed to realize ship tracking together with real-time detection and recognition of ship types, solving the problem of ship tracking and recognition in important monitored waters. In the Darknet network of the proposed method, the idea of residual networks was introduced, cross-layer jump connections were used to increase the depth of the network, and a ship depth feature matrix was constructed to extract high-level ship features for combination learning, yielding the ship feature map. On this basis, the YOLOv3 algorithm was introduced to realize target prediction based on global image information, and target region prediction and target class prediction were integrated into a single neural network model. A penalty mechanism was added to sharpen the differences between ship features across frames. By using a logistic regression layer for binary classification prediction, target tracking and recognition was realized quickly and accurately. The experimental results show that the proposed algorithm achieves an average recognition accuracy of 89.5% at a speed of 30 frames per second; compared with traditional and deep learning algorithms, it not only has better real-time performance and accuracy, but is also more robust to various environmental changes, and can recognize the types and important parts of various ships.
Rapid stable detection of human faces in image sequence based on MS-KCF model
YE Yuanzheng, LI Xiaoxia, LI Minze
Journal of Computer Applications    2018, 38 (8): 2192-2197.   DOI: 10.11772/j.issn.1001-9081.2018020363
In order to quickly and stably detect faces with large angle changes and serious occlusion in image sequences, a new automatic Detection-Tracking-Detection (DTD) model, namely the MS-KCF face detection model, was proposed by combining the fast and accurate target detection model MobileNet-SSD (MS) with the fast tracking model Kernel Correlation Filter (KCF). Firstly, the face was detected quickly and accurately by the MS model, and the tracking model was updated. Secondly, the detected face coordinates were fed into the KCF tracking model to track steadily, which accelerated the overall detection speed. Finally, to prevent tracking loss, the detection model was updated after tracking several frames and the face was detected again. The recall of the MS-KCF model on the FDDB face detection benchmark was 93.60%; its recalls on the Easy, Medium and Hard sets of the WIDER FACE benchmark were 93.11%, 92.18% and 82.97%, respectively; the average speed was 193 frames per second. Experimental results show that the MS-KCF model is stable and fast, and has a good detection effect on faces with serious occlusion and large angle changes.
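The DTD loop amounts to a simple schedule: run the slow detector periodically and the fast tracker in between. A sketch (the re-detection interval is illustrative, not taken from the paper):

```python
def dtd_schedule(n_frames, redetect_every=5):
    """Detection-Tracking-Detection plan: detect on frame 0, track for a
    few frames, then re-detect to correct tracker drift, and repeat."""
    plan = []
    for f in range(n_frames):
        plan.append("detect" if f % redetect_every == 0 else "track")
    return plan

plan = dtd_schedule(10, redetect_every=5)
```

Since the tracker is far cheaper than the detector, the average per-frame cost approaches the tracker's, which is how the combined model reaches 193 frames per second.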
Influence maximization algorithm based on structure hole and degree discount
LI Minjia, XU Guoyan, ZHU Shuai, ZHANG Wangjuan
Journal of Computer Applications    2018, 38 (12): 3419-3424.   DOI: 10.11772/j.issn.1001-9081.2018040920
The existing Influence Maximization (IM) algorithms for social networks suffer from a limited influence range because they only select locally optimal nodes. To solve this problem, considering the propagation advantages of core nodes and structure hole nodes, a maximization algorithm based on Structure Hole and Degree Discount (SHDD) was proposed. Firstly, the ideas of structure holes and degree centrality were integrated and applied to the influence maximization problem, and the factor α that best combines structure hole nodes and core nodes was sought to maximize propagation, spreading information more widely and increasing the influence over the whole network. Then, to highlight the advantage of integrating the two ideas, the influence of second-degree neighbors was added to the evaluation criterion used to select structure hole nodes. Experimental results on datasets of different scales show that, compared with the DegreeDiscount algorithm, SHDD can increase the influence range without consuming much more time, and compared with the Structure-based Greedy (SG) algorithm, SHDD can expand the influence range and reduce the time cost in networks with a large clustering coefficient. The proposed SHDD algorithm maximizes the advantages of fusing structure hole nodes and core nodes when the factor α is 0.6, and it expands the influence range more steadily in social networks with a large clustering coefficient.
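The combination the abstract describes can be read as a convex combination of the two node scores; a sketch with invented, pre-normalized scores:

```python
def shdd_score(structure_hole, degree_discount, alpha=0.6):
    """Combined seed score: alpha weighs the structure-hole measure
    against the degree-discount measure. The abstract reports alpha = 0.6
    as the best trade-off; both inputs are assumed normalized to [0, 1]."""
    return alpha * structure_hole + (1 - alpha) * degree_discount

# Hypothetical nodes: (structure-hole score, degree-discount score)
nodes = {"a": (0.9, 0.2), "b": (0.3, 0.8), "c": (0.7, 0.7)}
seed = max(nodes, key=lambda n: shdd_score(*nodes[n]))
```

Node "c" wins here despite topping neither measure alone, which is exactly the point of fusing the two criteria.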
Intrusion detection model based on hybrid convolutional neural network and recurrent neural network
FANG Yuan, LI Ming, WANG Ping, JIANG Xinghe, ZHANG Xinming
Journal of Computer Applications    2018, 38 (10): 2903-2907.   DOI: 10.11772/j.issn.1001-9081.2018030710
Aiming at the problem of advanced persistent threats in power information networks, a hybrid Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) intrusion detection model was proposed, in which current network states were classified according to various statistical characteristics of network traffic. Firstly, pre-processing such as feature encoding and normalization was performed on the network traffic obtained from log files. Secondly, spatial correlation features between different hosts' intrusion traffic were extracted by using deformable convolution kernels in the CNN. Finally, the processed data containing spatial correlation features were staggered in time, and the temporal correlation features of the intrusion traffic were mined by the RNN. The experimental results show that the Area Under Curve (AUC) of the model was increased by 7.5% to 14.0% compared with traditional machine learning models, and the false positive rate was reduced by 52.7% to 83.7%. This indicates that the proposed model can accurately identify the type of network traffic and significantly reduce the false positive rate.
Public auditing scheme of data integrity for public cloud
MIAO Junmin, FENG Chaosheng, LI Min, LIU Xia
Journal of Computer Applications    2018, 38 (10): 2892-2898.   DOI: 10.11772/j.issn.1001-9081.2018030510
Aiming at the problems of privacy leakage to Third-Party Auditors (TPA) and substitution attacks initiated by the Cloud Storage Server (CSS) in public auditing, a new public auditing scheme of data integrity for public cloud was proposed. Firstly, a hash value obfuscation method was used to obfuscate the evidence returned by the cloud storage server, preventing the TPA from analyzing and recovering the original data. Then, during the audit process, the TPA itself calculated the overlay tree of the Merkle Hash Tree (MHT) corresponding to the challenge request and matched it against the overlay tree returned by the CSS, preventing the cloud storage server from answering audit challenges with other existing data. Experimental results show that, after the privacy and attack problems of the existing scheme are solved, the computational overhead, storage overhead and communication overhead do not change by orders of magnitude.
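The MHT at the heart of the audit can be sketched as follows; an auditor that recomputes the same root from the server's partial (overlay) tree knows the challenged blocks are intact. SHA-256 and the duplicate-last-leaf rule for odd levels are common conventions here, not necessarily the paper's exact choices:

```python
import hashlib

def merkle_root(blocks):
    """Root of a Merkle Hash Tree over data blocks: leaves are block
    hashes, and each internal node hashes the concatenation of its two
    children. Any changed block changes the root."""
    level = [hashlib.sha256(b).digest() for b in blocks]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node if the level is odd
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"block0", b"block1", b"block2", b"block3"])
```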
Image denoising model with adaptive non-local data-fidelity term and bilateral total variation
GUO Li, LIAO Yu, LI Min, YUAN Hailin, LI Jun
Journal of Computer Applications    2017, 37 (8): 2334-2342.   DOI: 10.11772/j.issn.1001-9081.2017.08.2334
Aiming at the over-smoothing, residual noise around singular structures, contrast loss and staircase effects of common denoising methods, an image denoising model with an adaptive non-local data-fidelity term and bilateral total variation regularization was proposed, providing an adaptive non-local regularization energy function and the corresponding variational framework. Firstly, the data-fidelity term was obtained by a non-local means filter with adaptive weighting. Secondly, bilateral total variation regularization was introduced into the framework, with a regularization factor balancing the data-fidelity and regularization terms. Finally, the optimal solutions for different noise statistics were obtained by minimizing the energy function, thereby reducing residual noise and correcting excessive smoothing. Theoretical analysis and experimental results on both simulated and real noise images show that the proposed model can handle different statistical noise in images: compared with the adaptive non-local means filter, its Peak Signal-to-Noise Ratio (PSNR) can be up to 0.6 dB higher; compared with the total variation regularization algorithm, its subjective visual effect is obviously improved, image texture and edge details are well protected during denoising, the PSNR is up to 10 dB higher, and the Multi-Scale Structural SIMilarity index (MS-SSIM) is increased by 0.3. Therefore, the proposed model can better handle both the noise and the high-frequency detail of images, and has good practical value in video and image resolution enhancement.
Fine-grained scheduling policy based on erasure code
LIAO Hui, XUE Guangtao, QIAN Shiyou, LI Minglu
Journal of Computer Applications    2017, 37 (3): 613-619.   DOI: 10.11772/j.issn.1001-9081.2017.03.613
Aiming at the long data-acquisition delay and unstable downloads of cloud storage systems, a scheduling scheme based on storage-node load information and erasure coding was proposed. Firstly, erasure coding was utilized to improve the delay of data retrieval in cloud storage, with parallel threads downloading multiple coded copies simultaneously. Secondly, extensive load information about the storage nodes was analyzed to determine which performance indicators affect delay, and a new scheduling algorithm based on this load information was proposed. Finally, the open-source project OpenStack was used to build a real cloud computing platform on which the algorithm was tested with real user request traces and erasure coding. Extensive experiments show that, compared with other scheduling policies, the proposed scheme achieves 15% lower average delay and 40% lower delay volatility, proving that the scheduling policy can effectively improve the delay and stability of data retrieval on a real cloud platform and deliver a better user experience.
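The scheduling idea rests on a property of (k, n) erasure codes: any k of the n coded copies suffice to rebuild the object, so the scheduler is free to fetch from the k nodes its load model predicts to be fastest. A sketch with invented delay predictions:

```python
def pick_replicas(node_delays, k):
    """Choose the k nodes with the lowest predicted delay. With a (k, n)
    erasure code, downloading from any k nodes reconstructs the data, so
    slow or overloaded nodes can simply be skipped."""
    ranked = sorted(node_delays, key=node_delays.get)
    return ranked[:k]

# Hypothetical per-node delay predictions (ms) from load indicators
delays = {"n1": 40, "n2": 15, "n3": 90, "n4": 22, "n5": 60}
chosen = pick_replicas(delays, k=3)
```

Avoiding the slowest nodes in every request is also what damps delay volatility, not just the mean.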
Fully homomorphic encryption scheme without Gaussian noise
LI Mingxiang, LIU Zhao, ZHANG Mingyan
Journal of Computer Applications    2017, 37 (12): 3430-3434.   DOI: 10.11772/j.issn.1001-9081.2017.12.3430
Recently, a leveled fully homomorphic encryption scheme was proposed based on the Learning With Rounding (LWR) problem. The LWR problem is a variant of the Learning With Errors (LWE) problem that dispenses with costly Gaussian noise sampling; thus, compared with existing LWE-based fully homomorphic encryption schemes, the LWR-based scheme is much more efficient. However, the homomorphic evaluator of that scheme needs to obtain the user's evaluation key. Accordingly, a new leveled fully homomorphic encryption scheme was constructed based on the LWR problem in which the homomorphic evaluator does not need the user's evaluation key. Since the new scheme can be used to construct schemes such as identity-based and attribute-based fully homomorphic encryption, it has wider applicability than the recently proposed LWR-based fully homomorphic encryption scheme.
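An LWR sample replaces LWE's sampled Gaussian noise with deterministic rounding; a toy-parameter sketch (real schemes use far larger moduli and dimensions):

```python
import random

def lwr_sample(secret, q=1024, p=16, seed=0):
    """One Learning-With-Rounding sample: draw a uniform vector a mod q
    and output b = round((p/q) * <a, s>) mod p. The information lost in
    rounding from q down to p plays the role LWE assigns to Gaussian
    noise, so no noise sampling is needed."""
    rng = random.Random(seed)
    a = [rng.randrange(q) for _ in secret]
    inner = sum(ai * si for ai, si in zip(a, secret)) % q
    b = round(p * inner / q) % p
    return a, b

a, b = lwr_sample([3, 1, 4, 1, 5])
```

Removing the Gaussian sampler is exactly the efficiency advantage the abstract attributes to LWR-based constructions.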
Modeling and simulating thermotaxis behavior of Caenorhabditis elegans based on artificial neural network
LI Mingxu, DENG Xin, WANG Jin, WANG Xiao, ZHANG Xiaomou
Journal of Computer Applications    2016, 36 (7): 1909-1913.   DOI: 10.11772/j.issn.1001-9081.2016.07.1909
To study the thermotaxis behavior of Caenorhabditis elegans (C. elegans), a new method was proposed to model and simulate this behavior based on an artificial neural network. Firstly, the motion model of the nematode was established. Then, a nonlinear function was designed to approximate the movement logic of the nematode's thermotaxis. Thirdly, the speed and orientation-change capabilities were implemented with the artificial neural network. Finally, simulation experiments were carried out in the Matlab environment to reproduce the nematode's thermotaxis behavior. The experimental results show that a Back Propagation (BP) neural network simulates the thermotaxis of C. elegans better than a Radial Basis Function (RBF) neural network. They also demonstrate that the proposed method can successfully model the thermotaxis behavior of C. elegans and, to some extent, reveal its essence, which theoretically supports research on thermotaxis for crawling robots.
Automatic segmentation of glomerular basement membrane based on image patch matching
LI Chuangquan, LU Yanmeng, LI Mu, LI Mingqiang, LI Ran, CAO Lei
Journal of Computer Applications    2016, 36 (11): 3201-3206.   DOI: 10.11772/j.issn.1001-9081.2016.11.3201
An automatic segmentation method based on an image patch matching strategy was proposed to segment the glomerular basement membrane automatically. First, according to the characteristics of the glomerular basement membrane, the search range was extended from a single reference image to multiple reference images, and an improved search method was adopted to raise matching efficiency. Then, the optimal patches were found, and the label patches corresponding to them were extracted and weighted by matching similarity. Finally, the weighted label patches were rearranged into the initial segmentation of the glomerular basement membrane, from which the final segmentation was obtained after morphological processing. On the glomerular Transmission Electron Microscopy (TEM) dataset, the Jaccard coefficient is between 83% and 95%. The experimental results show that the proposed method achieves high accuracy.
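The similarity-weighted fusion of label patches can be sketched on flattened binary patches (weights and patches are invented; the threshold-at-half-the-total-weight rule is one common choice, not necessarily the paper's):

```python
def fuse_labels(matches):
    """Similarity-weighted fusion of candidate label patches: each match
    contributes its binary label patch weighted by its matching
    similarity; the fused pixel is foreground where the weighted vote
    reaches half the total weight."""
    total = sum(sim for sim, _ in matches)
    acc = [0.0] * len(matches[0][1])
    for sim, patch in matches:
        for i, v in enumerate(patch):
            acc[i] += sim * v
    return [1 if a >= total / 2 else 0 for a in acc]

# (similarity, flattened binary label patch) for three matched patches
matches = [(0.9, [1, 1, 0, 0]), (0.7, [1, 0, 0, 1]), (0.2, [0, 0, 1, 1])]
fused = fuse_labels(matches)
```

A low-similarity match (the 0.2 entry) cannot overturn the vote on its own, which is the point of weighting by similarity.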
Parameter optimization model of interval concept lattice based on compression theory
LI Mingxia, LIU Baoxiang, ZHANG Chunying
Journal of Computer Applications    2016, 36 (11): 2945-2949.   DOI: 10.11772/j.issn.1001-9081.2016.11.2945
Before an interval concept lattice is built from a formal context, the interval parameters [α, β] must be determined; they influence the concept extensions, the lattice structure, and the quantity and precision of the extracted association rules. In order to obtain the α and β giving the greatest compression degree of the interval concept lattice, firstly the similarity of binary relation pairs and the covering neighborhood space of a formal context were defined, the similarity matrix of binary relation pairs was obtained, and the neighborhood of binary relation pairs was calculated from the covering induced by the similarity classes of γ. Secondly, an update algorithm for concept sets under parameter changes was proposed, in which the concept sets are obtained without reconstruction. Combining this with the covering neighborhood of binary relation pairs under changing interval parameters, a parameter optimization model of the interval concept lattice was built based on compression theory, and the optimal interval parameter values were found according to the size and trend of the compression degree. Finally, the validity of the model was demonstrated by an example.
Bilinear image similarity matching algorithm based on deep feature analysis
LI Ming, ZHANG Hong
Journal of Computer Applications    2016, 36 (10): 2822-2825.   DOI: 10.11772/j.issn.1001-9081.2016.10.2822
Content-based image retrieval has long faced the problem of the "semantic gap", where feature selection directly influences semantic learning results; meanwhile, traditional distance metrics often measure similarity from a single perspective, which cannot fully express the similarity between images. To resolve these problems, a bilinear image similarity matching algorithm based on deep feature analysis was proposed. First, a Convolutional Neural Network (CNN) model was fine-tuned on the image dataset, and image features were then extracted with the trained CNN. After the output features of the fully connected layer were obtained, the image similarity was calculated by the bilinear similarity matching algorithm, and the most similar image instances were returned after sorting by similarity. Experimental results on the Caltech101 and Caltech256 datasets show that, compared with the contrast algorithms, the proposed algorithm obtains higher mean average precision, Top-K precision and recall, which demonstrates its effectiveness.
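Unlike a plain dot product, a bilinear score s(x, y) = xᵀWy lets a learned matrix W couple every pair of feature dimensions; a toy 2-D sketch (W and the feature vectors are invented for illustration):

```python
def bilinear_similarity(x, W, y):
    """Bilinear match score s(x, y) = x^T W y. The learned matrix W
    allows cross-dimension interactions that a single Euclidean or
    cosine measure cannot express."""
    Wy = [sum(W[i][j] * y[j] for j in range(len(y))) for i in range(len(W))]
    return sum(x[i] * Wy[i] for i in range(len(x)))

W = [[1.0, 0.5], [0.5, 1.0]]
s_similar = bilinear_similarity([1.0, 0.0], W, [0.9, 0.1])
s_dissimilar = bilinear_similarity([1.0, 0.0], W, [0.1, 0.9])
```

Ranking retrieval candidates by this score and returning the top of the sorted list mirrors the matching step described above.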
Node localization based on improved flooding broadcast and particle filtering in wireless sensor network
ZHAO Haijun, CUI Mengtian, LI Mingdong, LI Jia
Journal of Computer Applications    2016, 36 (10): 2659-2663.   DOI: 10.11772/j.issn.1001-9081.2016.10.2659
Aiming at the shortcomings of current mobile Wireless Sensor Network (WSN) localization, a localization algorithm based on an improved flooding broadcast mechanism and particle filtering was proposed. For a given unknown node, firstly, through the improved flooding broadcast mechanism, the effective average hop distance of the unknown node from its closest anchor node was used to calculate the distances to all of its neighbor nodes, and a differential error correction scheme was devised to reduce the measurement error accumulated over multiple hops in the average hop distance. Secondly, particle filtering and virtual anchor nodes were used to narrow the prediction area, yielding a more effective particle prediction area and further decreasing the estimation error of the unknown node's position. The simulation results show that, compared with the DV-Hop, Monte Carlo Baggio (MCB) and Range-based Monte Carlo Localization (MCL) algorithms, the proposed algorithm can effectively suppress broadcast redundancy, reduce the message overhead of node localization, and achieve higher-accuracy positioning at lower communication cost.
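The first stage reduces to multiplying hop counts by a corrected average hop distance; a simplified sketch (the actual differential error correction is more elaborate than the constant per-hop term assumed here):

```python
def estimate_distances(avg_hop_dist, hop_counts, correction=0.0):
    """DV-Hop-style distance estimates: each distance is the hop count
    times the effective average hop distance of the nearest anchor,
    with a per-hop correction term standing in for the accumulated-
    error compensation described in the abstract."""
    return {n: h * (avg_hop_dist - correction) for n, h in hop_counts.items()}

# Hypothetical: average hop distance 12 m, two nodes at 3 and 5 hops
dists = estimate_distances(12.0, {"u1": 3, "u2": 5}, correction=0.5)
```

These rough distances then seed the particle filter, whose prediction area the virtual anchors shrink.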
Image classification method based on visual saliency detection
LIU Shangwang, LI Ming, HU Jianlan, CUI Yanmeng
Journal of Computer Applications    2015, 35 (9): 2629-2635.   DOI: 10.11772/j.issn.1001-9081.2015.09.2629
To solve the problem that traditional image classification methods process the whole image in a non-hierarchical way, an image classification method based on visual saliency detection was proposed. Firstly, a visual attention model was employed to generate the salient region. Secondly, the texture feature and time-signature feature of the image were extracted by a Gabor filter and a pulse-coupled neural network, respectively. Finally, a support vector machine was adopted to classify the image according to the features of the salient region. The experimental results show that the classification precision of the proposed method on the SIMPLIcity and Caltech datasets is 94.26% and 95.43%, respectively, indicating that saliency detection and efficient image feature extraction are significant for image classification.
Reference | Related Articles | Metrics
Hadoop big data processing system model based on context-queue under Internet of things
LI Min, NI Shaoquan, QIU Xiaoping, HUANG Qiang
Journal of Computer Applications    2015, 35 (5): 1267-1272.   DOI: 10.11772/j.issn.1001-9081.2015.05.1267
Abstract534)      PDF (911KB)(969)       Save

To address the low real-time response capability of heterogeneous big data processing in the Internet of Things (IoT), data processing and persistence schemes based on Hadoop were analyzed, and a Hadoop big data processing system model based on "Context", named HDS (Hadoop big Data processing System), was proposed. The model uses the Hadoop framework to complete parallel data processing and persistence, abstracting heterogeneous data as "Context", the unified object processed in HDS. Definitions of "Context Distance" and "Context Neighborhood System (CNS)" were proposed based on the temporal-spatial characteristics of "Context", and a "Context Queue (CQ)" was designed as an auxiliary storage to overcome the low real-time response capability of the Hadoop framework. In particular, the optimization of task reorganization for client requests in CQ, based on the temporal and spatial characteristics of context, was described in detail. Finally, taking the vehicle scheduling problem in petroleum products distribution as an example, data processing performance and real-time response capability were tested by MapReduce distributed parallel computing experiments. The experimental results show that, compared with an ordinary computing system SDS (Single Data processing System), HDS is not only markedly superior in big data processing capability but also effectively overcomes the low real-time response of Hadoop. In a 10-server experimental environment, HDS outperforms SDS in data processing capability by a factor of more than 200, and the assistance of CQ improves the real-time response capability of HDS by a factor of more than 270.
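The "Context Distance" and "Context Queue" ideas can be sketched roughly as follows (an illustrative guess at the data structures; the paper's exact metric and weights are not reproduced): contexts carry a timestamp and a location, their distance mixes the temporal and spatial gaps, and the queue groups requests whose contexts fall within the same neighborhood so they can be dispatched to Hadoop as one batch.

```python
import math
from dataclasses import dataclass

@dataclass
class Context:
    t: float          # timestamp
    x: float          # spatial coordinates
    y: float
    payload: str = ""

def context_distance(a, b, w_time=1.0, w_space=1.0):
    # Weighted temporal-spatial distance between two contexts
    # (illustrative form; the paper defines its own metric).
    return (w_time * abs(a.t - b.t)
            + w_space * math.hypot(a.x - b.x, a.y - b.y))

class ContextQueue:
    """Groups incoming requests into context neighborhoods before dispatch."""
    def __init__(self, radius):
        self.radius = radius
        self.batches = []   # each batch is a list of Contexts

    def push(self, ctx):
        for batch in self.batches:
            # Join the first batch whose representative is close enough.
            if context_distance(batch[0], ctx) <= self.radius:
                batch.append(ctx)
                return
        self.batches.append([ctx])
```

Pushing two nearby contexts and one distant context yields two batches, so a client request can be served from the reorganized queue instead of waiting on a full Hadoop job.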

Reference | Related Articles | Metrics
Skeleton-driven mesh deformation technology based on subdivision
ZHANG Xiangyu, LI Ming, MA Xiqing
Journal of Computer Applications    2015, 35 (3): 811-815.   DOI: 10.11772/j.issn.1001-9081.2015.03.811
Abstract566)      PDF (988KB)(417)       Save

To solve the problem of preserving the detailed features of a model in traditional skeleton-driven deformation, a subdivision-based skeleton-driven mesh deformation method was proposed. Firstly, after the skeleton and control mesh were generated on the deformed region, the relationships between the skeleton and the control mesh, and between the subdivision surface of the control mesh and the deformed region, were established. Secondly, when the skeleton was modified according to the desired deformation result, the change information of the corresponding subdivision surface was transformed into an alteration of the mesh gradient field for Poisson-based reconstruction. Examples show that the deformation method achieves good editing effects on different mesh models and effectively preserves detailed features after deformation. Compared with the traditional skeleton-driven deformation method, it is easy to operate and preserves detailed features effectively, making it suitable for editing models with complex and rich geometric details.

Reference | Related Articles | Metrics
Distributed dynamic bandwidth allocation algorithm based on proportional-integral controller
ZHAO Haijun, LI Min, LI Mingdong, PU Bin
Journal of Computer Applications    2015, 35 (3): 615-619.   DOI: 10.11772/j.issn.1001-9081.2015.03.615
Abstract610)      PDF (763KB)(397)       Save

Aiming at fair and efficient bandwidth allocation for geographically distributed control systems, a distributed dynamic bandwidth allocation algorithm was proposed. Firstly, the bandwidth allocation problem was formulated as a convex optimization problem, namely maximizing the sum of the utilities of all the control systems. The distributed bandwidth allocation algorithm then let the control systems vary their sampling periods based on congestion information fed back from the network, and obtain the maximum usable sampling rate or transmission rate. The interaction between the control systems and the links was modelled as a time-delay dynamical system, and a Proportional-Integral (PI) controller was used as the link queue controller to realize the algorithm. The simulation results show that the proposed algorithm makes the transmission rates of all plants converge within 10 seconds to the value at which all plants share the bandwidth equally; at the same time, the queue of the PI controller stabilizes around the desired set point of 50 packets and accurately and steadily tracks the input signal, maximizing the performance of all control systems.
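A toy discrete-time simulation (illustrative only; the gains, rates and queue dynamics are made-up values, not those of the paper) shows the mechanism: a PI controller on the link queue reduces the senders' rate in proportion to the queue error and its integral, so the queue settles at the 50-packet set point even under a persistent demand surplus.

```python
def simulate_pi_queue(steps=300, q_ref=50.0, capacity=100.0,
                      base_rate=120.0, kp=0.5, ki=0.05):
    """PI control of a link queue.

    Each step: the sending rate is the demand minus a PI penalty on the
    queue error (q - q_ref); the queue then grows by the rate surplus
    over link capacity. The integral term removes the steady-state error
    caused by the constant 20-packet/step demand surplus.
    """
    q, integral = 0.0, 0.0
    for _ in range(steps):
        error = q - q_ref
        rate = max(0.0, base_rate - kp * error - ki * integral)
        integral += error
        q = max(0.0, q + rate - capacity)   # queue evolution per time step
    return q
```

With these gains the closed loop is stable (both eigenvalues of the linearized error dynamics lie inside the unit circle), so the final queue length sits essentially on the set point.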

Reference | Related Articles | Metrics
Reliability modeling and analysis of embedded system hardware based on Copula function
GUO Rongzuo, FAN Xiangkui, CUI Dongxia, LI Ming
Journal of Computer Applications    2015, 35 (2): 550-554.   DOI: 10.11772/j.issn.1001-9081.2015.02.0550
Abstract526)      PDF (843KB)(360)       Save

The reliability of Embedded System Hardware (ESH) is very important, as it directly affects the quality and longevity of the embedded system. To analyze the reliability of ESH, it was studied from the hardware perspective using Copula functions. First, an abstract formalization of the ESH was defined at the composition level. Then the reliability of each function module of the ESH was modeled by considering the integration of hardware and software, and a Copula function was used to establish the reliability model of the whole ESH. Finally, the parameters of the proposed reliability model were estimated, and a specific calculation example using the proposed model was presented and compared with some other Copula functions. The result shows that the proposed model based on the Copula function is effective.
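The role of a copula in joint reliability can be shown with a minimal sketch (a generic Gumbel copula example, not the specific model or parameter values of the paper): given the marginal reliabilities of two modules, the copula couples them into a joint survival probability, with θ = 1 reducing to independence.

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v) = exp(-[(-ln u)^θ + (-ln v)^θ]^(1/θ)),
    θ >= 1; θ = 1 gives independence, C(u, v) = u * v."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_reliability(t, lam1, lam2, theta):
    # Two modules with exponential marginal reliabilities, coupled by the
    # Gumbel copula (illustrative marginals; the paper estimates its own
    # parameters from data).
    r1 = math.exp(-lam1 * t)
    r2 = math.exp(-lam2 * t)
    return gumbel_copula(r1, r2, theta)
```

Positive dependence (θ > 1) yields a higher joint survival probability than the independence assumption, which is exactly the effect a copula-based system model is meant to capture.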

Reference | Related Articles | Metrics
Trajectory segment-based abnormal behavior detection method using LDA model
ZHENG Bingbin, FAN Xinnan, LI Min, ZHANG Ji
Journal of Computer Applications    2015, 35 (2): 515-518.   DOI: 10.11772/j.issn.1001-9081.2015.02.0515
Abstract689)      PDF (830KB)(485)       Save

Most current trajectory-based abnormal behavior detection algorithms do not consider the internal information of the trajectory, which may lead to a high false alarm rate. An abnormal behavior detection method based on trajectory segments and a topic model was presented. Firstly, the original trajectories were partitioned into trajectory segments according to turning angles. Secondly, the behavior characteristic information was extracted by quantizing the observations from these segments into different visual words. Then the spatio-temporal relationships among the trajectories were explored with the Latent Dirichlet Allocation (LDA) model. Finally, behavior pattern analysis and abnormal behavior detection were implemented by learning the corresponding generative topic model combined with Bayesian theory. Simulation experiments on behavior pattern analysis and abnormal behavior detection were conducted on two video scenes, and different kinds of abnormal behavior patterns were detected. The experimental results show that, combined with trajectory segmentation, the proposed method can mine the internal behavior characteristic information to identify a variety of abnormal behavior patterns and improve the accuracy of abnormal behavior detection.
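The first step, partitioning a trajectory at sharp turns, can be sketched as follows (a minimal illustration; the paper's angle threshold and the subsequent quantization into visual words are not reproduced):

```python
import math

def segment_by_turning_angle(points, angle_thresh_deg=45.0):
    """Split a trajectory wherever the heading change between consecutive
    displacement vectors exceeds the threshold (threshold is illustrative)."""
    segments, cur = [], [points[0]]
    for i in range(1, len(points) - 1):
        v1 = (points[i][0] - points[i-1][0], points[i][1] - points[i-1][1])
        v2 = (points[i+1][0] - points[i][0], points[i+1][1] - points[i][1])
        turn = math.atan2(v2[1], v2[0]) - math.atan2(v1[1], v1[0])
        turn = abs((turn + math.pi) % (2 * math.pi) - math.pi)  # wrap to [0, pi]
        cur.append(points[i])
        if math.degrees(turn) > angle_thresh_deg:
            segments.append(cur)
            cur = [points[i]]    # the turning point starts the next segment
    cur.append(points[-1])
    segments.append(cur)
    return segments
```

An L-shaped path splits into two segments at the corner; each segment would then be quantized into visual words for the LDA model.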

Reference | Related Articles | Metrics
Applicability evaluating method of Dempster's combination rule for bodies of evidence with non-singleton elements
LIU Zhexi, YANG Jianhong, YANG Debin, LI Min, MIN Xianchun
Journal of Computer Applications    2015, 35 (2): 461-465.   DOI: 10.11772/j.issn.1001-9081.2015.02.0461
Abstract376)      PDF (737KB)(358)       Save

When the bodies of evidence contain non-singleton elements whose basic probability assignments differ greatly between any two bodies of evidence, evaluating the applicability of Dempster's combination rule can yield fuzzy or even inaccurate conclusions. To overcome this problem, a modified pignistic probability distance was proposed to describe the relevance between bodies of evidence. Combining the modified pignistic probability distance with the classical conflict coefficient, a new method for evaluating the applicability of Dempster's combination rule was presented, in which a new conflict coefficient was defined to measure the conflict between bodies of evidence. The new conflict coefficient equals the modified pignistic probability distance when the classical conflict coefficient is zero, and equals the average of the modified pignistic probability distance and the classical conflict coefficient otherwise. The results of numerical analysis examples demonstrate that, compared with the evaluating method based on the pignistic probability distance, the proposed method based on the improved pignistic probability distance provides more applicable and reasonable evaluations of the applicability of Dempster's combination rule.
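For reference, Dempster's rule itself and its classical conflict coefficient K can be written compactly in the standard textbook form (the paper's modified pignistic distance and new coefficient are built on top of quantities like these):

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping frozenset
    hypotheses to mass). Returns the combined assignment and the
    classical conflict coefficient K."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb            # mass committed to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

def pignistic(m):
    # Pignistic transform BetP: spread each focal element's mass
    # uniformly over its singleton members.
    bet = {}
    for focal, mass in m.items():
        for x in focal:
            bet[x] = bet.get(x, 0.0) + mass / len(focal)
    return bet
```

The pignistic probability distance between two bodies of evidence compares their `pignistic` vectors; the non-singleton elements are exactly the focal sets with more than one member, whose mass BetP splits evenly.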

Reference | Related Articles | Metrics
Fault diagnosis method of high-speed rail based on compute unified device architecture
CHEN Zhi, LI Tianrui, LI Ming, YANG Yan
Journal of Computer Applications    2015, 35 (10): 2819-2823.   DOI: 10.11772/j.issn.1001-9081.2015.10.2819
Abstract409)      PDF (703KB)(406)       Save
Concerning the problem that traditional fault diagnosis of High-Speed Rail (HSR) vibration signals is slow and cannot meet the actual requirement of real-time processing, an accelerated fault diagnosis method for HSR vibration signals based on Compute Unified Device Architecture (CUDA) was proposed. First, the HSR data were processed by CUDA-based Empirical Mode Decomposition (EMD), and the fuzzy entropy of each resulting component was calculated. Finally, the K-Nearest Neighbors (KNN) classification algorithm was used to classify the feature space consisting of multiple fuzzy entropy features. The experimental results show that the proposed method is effective for fault classification of HSR vibration signals, and its processing speed is significantly improved compared with the traditional method.
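The fuzzy-entropy feature can be sketched with one common FuzzyEn variant (pure Python and illustrative; the paper's exact parameters and the CUDA-parallel EMD are not shown): patterns of length m are compared with an exponential fuzzy membership function, and the entropy is the log-ratio of the average similarities at lengths m and m+1.

```python
import math
import statistics

def fuzzy_entropy(x, m=2, r_factor=0.2):
    """One common FuzzyEn variant: lower values indicate a more regular
    signal. The tolerance r is scaled by the signal's standard deviation."""
    r = r_factor * statistics.pstdev(x)

    def phi(k):
        # Baseline-removed patterns of length k.
        vecs = []
        for i in range(len(x) - k + 1):
            v = x[i:i + k]
            mean = sum(v) / k
            vecs.append([a - mean for a in v])
        sims, pairs = 0.0, 0
        for i in range(len(vecs)):
            for j in range(i + 1, len(vecs)):
                d = max(abs(a - b) for a, b in zip(vecs[i], vecs[j]))
                sims += math.exp(-(d / r) ** 2)   # fuzzy membership
                pairs += 1
        return sims / pairs

    return math.log(phi(m)) - math.log(phi(m + 1))
```

A smooth periodic signal scores lower than a chaotic one, which is why the fuzzy entropies of the EMD components separate fault states; the O(N^2) pair comparison is also what makes GPU parallelization attractive.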
Reference | Related Articles | Metrics
Cascading invulnerability attack strategy of complex network via community detection
DING Chao, YAO Hong, DU Jun, PENG Xingzhao, LI Minhao
Journal of Computer Applications    2014, 34 (6): 1666-1670.  
Abstract217)      PDF (814KB)(499)       Save

In order to investigate cascading invulnerability attack strategies for complex networks via community detection, the initial load of a node was defined by the betweenness of the node and its neighbors, a definition that comprehensively considers the information of the nodes, and the load on a broken node was redistributed to its neighbors according to a local preferential probability. When the network was intentionally attacked based on community detection, the coupling strength and the invulnerability of Watts-Strogatz (WS), Barabási-Albert (BA), Erdős-Rényi (ER) and World-Local (WL) networks, as well as of networks with overlapping and non-overlapping communities under different attack strategies, were studied. The results show that the network's cascading invulnerability is negatively related to the coupling strength; for the different types of networks, provided that the fast division algorithm correctly detects the community structure, the network invulnerability is lowest when the node with the largest betweenness is attacked; after detecting overlapping communities with the Clique Percolation Method (CPM), the network invulnerability is lowest when the overlapping node with the largest betweenness is attacked. It follows that the network is damaged most severely under the community-detection-based attack strategy.

Reference | Related Articles | Metrics
Research on cascading invulnerability of community structure networks under intentional-attack
LI Minhao, DU Jun, PENG Xingzhao, DING Chao
Journal of Computer Applications    2014, 34 (4): 935-938.   DOI: 10.11772/j.issn.1001-9081.2014.04.0935
Abstract419)      PDF (702KB)(421)       Save

In order to investigate the effects of community structure on cascading invulnerability, within the framework of a community structure network, the initial load of a node was defined by its betweenness, and the load on a broken node was redistributed to its neighboring nodes according to a preferential probability. When the node with the largest load was intentionally attacked, the relations of the load exponent, the coupling strength within a community, the coupling strength between communities, and the modularity function to the network's invulnerability were studied. The results show that the network's cascading invulnerability is positively related to the coupling strength within a community, the coupling strength between communities and the modularity function, and negatively related to the load exponent. Comparison with BA (Barabási-Albert) scale-free networks and WS (Watts-Strogatz) small-world networks indicates that community structure lowers the network's cascading invulnerability, and that the more homogeneous the betweenness distribution is, the stronger the network's cascading invulnerability is.
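The load-redistribution mechanism behind both cascading studies can be sketched as follows (a simplified illustration with made-up loads standing in for betweenness values; the capacity tolerance `beta` plays the role of a capacity parameter, and the exact definitions are the papers' own):

```python
def cascade_after_attack(adj, load, beta):
    """Attack the node with the largest load; redistribute a failed node's
    load to its surviving neighbors in proportion to their own loads
    (preferential probability). A node fails when its load exceeds its
    capacity (1 + beta) * initial load."""
    load = dict(load)                       # work on a copy
    cap = {n: (1 + beta) * l for n, l in load.items()}
    target = max(load, key=load.get)
    failed, frontier = {target}, [target]
    while frontier:
        node = frontier.pop()
        alive = [n for n in adj[node] if n not in failed]
        total = sum(load[n] for n in alive)
        if total > 0:
            shares = {n: load[n] / total for n in alive}
            for n in alive:
                load[n] += load[node] * shares[n]
        newly = [n for n in alive if load[n] > cap[n]]
        failed.update(newly)
        frontier.extend(newly)
    return failed
```

On a three-node chain, a small tolerance lets the central node's failure propagate to the whole network, while a large tolerance confines the damage to the attacked node; sweeping `beta` (or the load exponent) reproduces the qualitative invulnerability trends the abstract describes.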

Reference | Related Articles | Metrics
Solution method for inverse kinematics of virtual human's upper limb kinematic chain based on improved genetic algorithm
DENG Gangfeng, HUANG Xianxiang, GAO Qinhe, ZHANG Zhili, LI Min
Journal of Computer Applications    2014, 34 (1): 129-134.   DOI: 10.11772/j.issn.1001-9081.2014.01.0129
Abstract764)      PDF (1016KB)(678)       Save
An Improved Genetic Algorithm (IGA) was proposed for solving the inverse kinematics problem of the upper limb kinematic chain, which has a high degree of freedom and is too complex to be solved by geometric, algebraic or iterative methods. First, the joint units of the upper limb kinematic chain and its mathematical model were constructed by the Denavit-Hartenberg (D-H) method; then population diversification and initialization were completed by simulating a human population, and adaptive crossover and mutation operators were designed. The simulation results show that the IGA can search for high-precision solutions with a higher probability than the standard genetic algorithm, while avoiding premature convergence and inefficient searching in the later stage.
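A toy version of GA-based inverse kinematics (illustrative only: a planar 3-joint chain and a simple convergence-triggered mutation boost, not the paper's D-H model or adaptive operators) searches for joint angles whose forward kinematics reach a target point:

```python
import math
import random

def forward_kinematics(angles, lengths=(1.0, 1.0, 1.0)):
    # End-effector position of a planar serial chain with relative joint angles.
    x = y = heading = 0.0
    for theta, l in zip(angles, lengths):
        heading += theta
        x += l * math.cos(heading)
        y += l * math.sin(heading)
    return x, y

def ga_inverse_kinematics(target, pop_size=60, generations=150, seed=3):
    rng = random.Random(seed)

    def error(ind):
        x, y = forward_kinematics(ind)
        return math.hypot(x - target[0], y - target[1])

    pop = [[rng.uniform(-math.pi, math.pi) for _ in range(3)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        best, worst = error(pop[0]), error(pop[-1])
        # Adaptive mutation: mutate harder once the population has converged,
        # which counteracts premature convergence.
        sigma = 0.5 if worst - best < 1e-3 else 0.1
        next_pop = pop[:10]                       # elitism
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:30], 2)        # parents from the better half
            child = [(u + v) / 2 for u, v in zip(a, b)]  # arithmetic crossover
            for i in range(3):
                if rng.random() < 0.3:
                    child[i] += rng.gauss(0.0, sigma)
            next_pop.append(child)
        pop = next_pop
    pop.sort(key=error)
    return pop[0], error(pop[0])
```

Because the inverse kinematics of a redundant chain has infinitely many solutions, the GA simply returns one configuration whose end-effector lands near the target.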
Related Articles | Metrics